    Statistical physics of independent component analysis

    Statistical physics is used to investigate independent component analysis with polynomial contrast functions. While the replica method fails, an adapted cavity approach yields valid results. The learning curves, obtained in a suitable thermodynamic limit, display a first-order phase transition from poor to perfect generalization. (Comment: 7 pages, 1 figure, to appear in Europhys. Lett.)

    Diesel engine fuel injection monitoring using acoustic measurements and independent component analysis

    Air-borne acoustic condition monitoring is a promising technique because of its non-intrusive nature and the rich information contained within the acoustic signals, which include all sources. However, background noise contamination, interferences, and the number of internal combustion engine (ICE) vibro-acoustic sources preclude the extraction of condition information with this technique: lower-energy events, such as fuel injection, are buried within higher-energy events and/or corrupted by background noise. This work firstly investigates the characteristics of diesel engine air-borne acoustic signals and the benefits of joint time-frequency domain analysis. Secondly, the air-borne acoustic signals in the vicinity of the injector head were recorded using three microphones around the fuel injector (120° apart from each other), and an Independent Component Analysis (ICA) based scheme was developed to decompose these acoustic signals. The fuel injection process characteristics were then revealed in the time-frequency domain using the Wigner-Ville distribution (WVD) technique. Consequently, the energy levels around the injection process period, between 11 and 5 degrees before top dead center and in the 9 to 15 kHz frequency band, were calculated. The developed technique was validated with simulated signals and with empirical measurements at injection pressure levels from 250 down to 210 bar in steps of 10 bar. The recovered energy levels in the tested conditions were found to be affected by the injector pressure settings.
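
The ICA decomposition step described above can be sketched with scikit-learn's FastICA on synthetic stand-ins for the three microphone channels. The source waveforms (a tone, a transient "injection" burst, a square-wave interference) and the mixing matrix are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 50 * t)                                       # tonal component
s2 = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 400 * t)   # transient "injection" burst
s3 = np.sign(np.sin(2 * np.pi * 30 * t))                              # square-wave interference
S = np.c_[s1, s2, s3]

# Assumed mixing of the three sources at the three microphones
A = np.array([[1.0, 0.6, 0.4],
              [0.5, 1.0, 0.7],
              [0.3, 0.8, 1.0]])
X = S @ A.T                         # the three recorded channels

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)        # recovered sources, up to order and scale
```

In practice each recovered component would then be passed to a time-frequency analysis (the paper uses the WVD) to locate the injection event.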

    Finding Exogenous Variables in Data with Many More Variables than Observations

    Many statistical methods have been proposed to estimate causal models in classical situations with fewer variables than observations (p < n, where p is the number of variables and n the number of observations). However, modern datasets such as gene expression data call for high-dimensional causal modeling in challenging situations with many more variables than observations (p >> n). In this paper, we propose a method to find exogenous variables in a linear non-Gaussian causal model which requires much smaller sample sizes than conventional methods and works even when p >> n. The key idea is to identify which variables are exogenous based on non-Gaussianity, instead of estimating the entire structure of the model. Exogenous variables act as triggers that activate a causal chain in the model, and their identification leads to more efficient experimental designs and a better understanding of the causal mechanism. We present experiments with artificial data and real-world gene expression data to evaluate the method. (Comment: a revised version of this was published in Proc. ICANN201)
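
The key idea can be illustrated with a toy criterion: in a linear non-Gaussian model, the exogenous variable is the one that is statistically independent of the residuals obtained when every other variable is regressed on it. The dependence proxy below (a handful of nonlinear correlations) is a simplified stand-in for the paper's statistic, and the three-variable chain is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
e1, e2, e3 = rng.laplace(size=(3, n))   # non-Gaussian external influences
x1 = e1                                  # exogenous variable
x2 = 0.8 * x1 + e2
x3 = 0.5 * x2 + e3
X = np.c_[x1, x2, x3]

def dependence_score(j, X):
    """Smaller = candidate j looks more independent of all regression residuals."""
    xj = X[:, j]
    score = 0.0
    for i in range(X.shape[1]):
        if i == j:
            continue
        b = xj @ X[:, i] / (xj @ xj)
        r = X[:, i] - b * xj             # residual of x_i regressed on x_j
        # cheap nonlinear-correlation proxies for statistical dependence
        score += abs(np.corrcoef(np.tanh(xj), r)[0, 1])
        score += abs(np.corrcoef(xj, np.tanh(r))[0, 1])
        score += abs(np.corrcoef(xj ** 2, r ** 2)[0, 1])
    return score

scores = [dependence_score(j, X) for j in range(X.shape[1])]
exogenous = int(np.argmin(scores))       # index of the estimated exogenous variable
```

Only the exogenous candidate leaves residuals that are genuinely independent of it; for any non-exogenous candidate, non-Gaussianity makes the dependence detectable.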

    Independent component analysis: algorithms and applications

    A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data; in other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, along with our recent work on the subject.
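
The ICA model x = As and the pursuit of maximally non-Gaussian components can be condensed into a minimal two-source FastICA in NumPy: centre and whiten the data, then iterate the tanh-contrast fixed-point update with symmetric orthogonalisation. Sources, mixing matrix, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
# two independent, non-Gaussian sources (super- and sub-Gaussian)
S = np.c_[rng.laplace(size=n), rng.uniform(-1, 1, size=n)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])   # unknown mixing matrix
X = S @ A.T

# 1) centre and whiten (ZCA)
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X.T))
Z = X @ E @ np.diag(d ** -0.5) @ E.T

# 2) fixed-point iteration with the tanh contrast, symmetric orthogonalisation
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)                  # g(w_i^T z) for each component
    Gp = 1 - G ** 2                       # g'(w_i^T z)
    W = (G.T @ Z) / n - np.diag(Gp.mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                            # re-orthogonalise the unmixing rows

S_hat = Z @ W.T                           # estimated sources, up to order and sign
```

After whitening, the unmixing matrix is orthogonal, which is what makes the SVD-based symmetric orthogonalisation valid.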

    Accurate and robust image superresolution by neural processing of local image representations

    Image superresolution involves the processing of an image sequence to generate a still image with higher resolution. Classical approaches, such as Bayesian MAP methods, require iterative minimization procedures with high computational costs. Recently, the authors proposed a method to tackle this problem based on the use of a hybrid MLP-PNN architecture. In this paper, we present a novel superresolution method, based on an evolution of this concept, that incorporates local image models. A neural processing stage receives as input the values of the model coefficients on local windows. The data dimensionality is first reduced by application of PCA. An MLP, trained on synthetic sequences with various amounts of noise, estimates the high-resolution image data. The effect of varying the dimension of the network input space is examined, showing a complex, structured behavior. Quantitative results are presented showing the accuracy and robustness of the proposed method.
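
The PCA-then-MLP pipeline can be sketched on a synthetic 1-D signal: local low-resolution windows are reduced with PCA and an MLP regresses the missing high-resolution sample. The data, window size, and network sizes are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
hi = np.sin(np.linspace(0, 40, 4000)) + 0.05 * rng.standard_normal(4000)
lo = hi[::2]                                    # crude 2x decimation

# training pairs: a 9-sample low-res window -> the in-between high-res sample
w = 4
Xs, ys = [], []
for i in range(w, lo.size - w - 1):
    Xs.append(lo[i - w:i + w + 1])
    ys.append(hi[2 * i + 1])                    # sample dropped by decimation
Xs, ys = np.array(Xs), np.array(ys)

# PCA reduces the window to a few coefficients; the MLP does the regression
model = make_pipeline(PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(Xs, ys)
pred = model.predict(Xs)
```

Varying `n_components` here corresponds to the paper's study of the network input dimension.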

    Solvent content of protein crystals from diffraction intensities by Independent Component Analysis

    An analysis of the protein content of several crystal forms of proteins has been performed. We apply a new numerical technique, Independent Component Analysis (ICA), to determine the volume fraction of the asymmetric unit occupied by the protein. This technique requires only the crystallographic structure-factor data as input. (Comment: 9 pages, 2 figures, 1 table)

    Identification of Economic Shocks by Inequality Constraints in Bayesian Structural Vector Autoregression

    Theories often make predictions about the signs of the effects of economic shocks on observable variables, thus implying inequality constraints on the parameters of a structural vector autoregression (SVAR). We introduce a new Bayesian procedure to evaluate the probabilities of such constraints and, hence, to validate the theoretically implied economic shocks. We first estimate an SVAR in which the shocks are identified by statistical properties of the data, and subsequently label these statistically identified shocks by the Bayes factors calculated from their probabilities of satisfying given inequality constraints. In contrast to the related sign-restriction approach, which also makes use of theoretically implied inequality constraints, no restrictions are imposed. Hence, it is possible that only a subset, or none, of the theoretically implied shocks can be labelled. In the latter case, we conclude that the data do not lend support to the theory implying the signs of the effects in question. We illustrate the method with empirical applications to the crude oil market and U.S. monetary policy. (Peer reviewed)
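
The labelling step can be sketched as a Monte Carlo computation: estimate the posterior probability that a shock's impact effects satisfy the theory-implied sign pattern, and compare it with the prior probability of that pattern via a Bayes factor. The "posterior" below is a mock Gaussian stand-in, not an estimated SVAR posterior, and the sign pattern is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(4)
# mock posterior draws of a shock's impact effects on (output, price)
draws = rng.normal(loc=[0.8, -0.5], scale=0.3, size=(10000, 2))

# theory-implied pattern, e.g. a supply shock: output up, price down
satisfies = (draws[:, 0] > 0) & (draws[:, 1] < 0)
p_post = satisfies.mean()

# under a symmetric prior, each of the four sign patterns is equally likely
p_prior = 0.25
bayes_factor = (p_post / (1 - p_post)) / (p_prior / (1 - p_prior))
```

A large Bayes factor supports labelling the statistically identified shock with the theoretical name; a factor near or below one leaves the shock unlabelled.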

    Causal discovery with general non-linear relationships using non-linear ICA

    We consider the problem of inferring causal relationships between two or more passively observed variables. While the problem of such causal discovery has been extensively studied, especially in the bivariate setting, the majority of current methods assume a linear causal relationship, and the few methods which consider non-linear relations usually make the assumption of additive noise. Here, we propose a framework through which we can perform causal discovery in the presence of general non-linear relationships. The proposed method is based on recent progress in non-linear independent component analysis (ICA) and exploits the non-stationarity of observations in order to recover the underlying sources. We show rigorously that in the case of bivariate causal discovery, such non-linear ICA can be used to infer causal direction via a series of independence tests. We further propose an alternative measure for inferring causal direction based on asymptotic approximations to the likelihood ratio, as well as an extension to multivariate causal discovery. We demonstrate the capabilities of the proposed method via a series of simulation studies and conclude with an application to neuroimaging data.
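
The full pipeline (non-linear ICA over non-stationary segments, then independence tests) is involved; as a simplified stand-in, here is the additive-noise special case of the same decision rule: fit a non-linear regression in both directions and prefer the direction whose residual is more independent of the input. The dependence proxy and the data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
x = rng.uniform(-2, 2, n)
y = np.tanh(2 * x) + 0.2 * rng.uniform(-1, 1, n)   # ground truth: x -> y

def dependence(a, r):
    """Cheap dependence proxy: nonlinear correlations between input and residual."""
    feats = [a, a ** 2, np.tanh(a)]
    score = sum(abs(np.corrcoef(f, r)[0, 1]) for f in feats)
    return score + abs(np.corrcoef(a ** 2, r ** 2)[0, 1])

def fit_residual(a, b, deg=7):
    """Non-linear (polynomial) regression of b on a; returns the residual."""
    coef = np.polyfit(a, b, deg)
    return b - np.polyval(coef, a)

score_xy = dependence(x, fit_residual(x, y))   # residual of y given x
score_yx = dependence(y, fit_residual(y, x))   # residual of x given y
direction = "x->y" if score_xy < score_yx else "y->x"
```

In the causal direction the residual is (approximately) the independent noise term; in the anti-causal direction it stays dependent on the input, which is what the independence tests detect.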